
Develop #609

Merged
MervinPraison merged 2 commits into main from develop
Jun 5, 2025
Conversation

@MervinPraison
Owner

@MervinPraison MervinPraison commented Jun 5, 2025

PR Type

enhancement, bug_fix, tests


Description

  • Bump PraisonAI version to 2.2.29 across all relevant files

    • Update Dockerfiles, README, pyproject.toml, and Ruby formula
    • Ensure dependency on praisonaiagents >=0.0.101
  • Enhance agent and memory modules for improved LLM/model handling

    • Use LiteLLM for quality metrics in memory
    • Refine model extraction logic in task execution
    • Optimize TaskOutput instantiation in agent guardrail logic
  • Add new example and test scripts for PraisonAI Agents

    • Introduce guardrail agent example
    • Provide a test script for agent/task execution
  • Update dependencies and packaging configuration

    • Add litellm to memory requirements
    • Adjust setuptools config for package discovery
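The guardrail example introduced here wires a content validator into the agent's retry loop. The sketch below reimplements that pattern standalone, without the praisonaiagents dependency: `validate_content` is taken from the example file itself, while `run_with_guardrail` is a hypothetical stand-in for the library's internal `_apply_guardrail_with_retry` logic, not the actual API.

```python
def validate_content(data):
    """Guardrail from the example: reject responses under 50 characters."""
    if len(str(data)) < 50:
        return False, "Content too short"
    return True, data

def run_with_guardrail(generate, guardrail, max_retries=1):
    """Call `generate`, re-asking up to `max_retries` times if the guardrail fails.

    Hypothetical helper illustrating the retry pattern; the real implementation
    lives inside the Agent class.
    """
    last_reason = None
    for _ in range(max_retries + 1):
        ok, payload = guardrail(generate())
        if ok:
            return payload
        last_reason = payload
    raise ValueError(f"Guardrail failed after {max_retries} retries: {last_reason}")
```

Note the mismatch the reviewers flag below: a prompt asking for a 5-word message will essentially never pass a 50-character guardrail, so the single retry is exhausted every time.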

Changes walkthrough 📝

Relevant files

Enhancement (11 files)
  • guardrail_agent_example.py: Add example for agent guardrail validation (+14/-0)
  • agent.py: Optimize TaskOutput instantiation in guardrail logic (+2/-5)
  • memory.py: Use LiteLLM for quality metrics calculation (+7/-4)
  • pyproject.toml: Bump version, add litellm, update setuptools config (+6/-4)
  • pyproject.toml: Bump PraisonAI and praisonaiagents versions (+4/-4)
  • deploy.py: Update Dockerfile generation to use new version (+1/-1)
  • praisonai.rb: Update Ruby formula for new PraisonAI version (+2/-2)
  • Dockerfile: Bump PraisonAI version to 2.2.29 (+1/-1)
  • Dockerfile.chat: Bump PraisonAI version to 2.2.29 (+1/-1)
  • Dockerfile.dev: Bump PraisonAI version to 2.2.29 (+1/-1)
  • Dockerfile.ui: Bump PraisonAI version to 2.2.29 (+1/-1)

Bug fix (1 file)
  • task.py: Refine model extraction for quality metrics (+5/-1)

Tests (1 file)
  • test.py: Add test script for agent/task execution (+33/-0)

Documentation (1 file)
  • README.md: Update PraisonAI version references in docs (+2/-2)

Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.

Summary by CodeRabbit

    • New Features
      • Added an example script demonstrating agent guardrail validation.
      • Introduced a test script showcasing agent workflow setup and execution.
    • Bug Fixes
      • Improved logic for model selection and quality metric calculation in agent tasks and memory.
    • Chores
      • Updated PraisonAI and PraisonAIAgents package versions to latest releases.
      • Updated Dockerfiles and documentation to require the latest package versions.
      • Enhanced package discovery for PraisonAIAgents.
      • Updated ignore patterns for development files.

    - Incremented version of praisonaiagents from 0.0.99 to 0.0.101 in pyproject.toml and uv.lock.
    - Added 'litellm' dependency to memory requirements for enhanced functionality.
    - Updated .gitignore to include 'CopilotKit*' for better file management.
    - Optimised TaskOutput instantiation in agent.py for clarity.
    - Refined memory handling in memory.py to utilise LiteLLM for consistency.
    - Improved model extraction logic in task.py for better fallback handling.
    - Incremented the version of PraisonAI from 2.2.28 to 2.2.29 in Dockerfiles (Dockerfile, Dockerfile.chat, Dockerfile.dev, Dockerfile.ui).
    - Updated the version in the README.md and pyproject.toml files to reflect the new version.
    - Adjusted the deploy.py script to install the updated version of PraisonAI.
    - Ensured consistency across all relevant files for seamless integration.
    @coderabbitai
    Contributor

    coderabbitai bot commented Jun 5, 2025

    Caution

    Review failed

    The pull request is closed.

    Walkthrough

    This update increments the PraisonAI and PraisonAIAgents package versions across Dockerfiles, deployment scripts, and project metadata. It introduces new example and test scripts for agent workflows and guardrail validation. Additionally, it transitions quality metric evaluation from OpenAI to LiteLLM, adjusts agent output construction, and updates dependency management.

    Changes

    Files / Groups Change Summary
    .gitignore Added ignore pattern for CopilotKit* files/directories.
    docker/Dockerfile, docker/Dockerfile.chat, docker/Dockerfile.dev, docker/Dockerfile.ui Updated praisonai package version from 2.2.28 to 2.2.29 in pip install commands.
    docker/README.md Updated PraisonAI version references from 2.2.28 to 2.2.29 in documentation and example commands.
    src/praisonai-agents/guardrail_agent_example.py Added new example script demonstrating agent guardrail validation and retry logic.
    src/praisonai-agents/praisonaiagents/agent/agent.py Simplified TaskOutput construction in guardrail retry logic to include only description and agent name.
    src/praisonai-agents/praisonaiagents/memory/memory.py Switched quality metric evaluation from OpenAI client to LiteLLM, updated model default.
    src/praisonai-agents/praisonaiagents/task/task.py Improved logic for extracting model name from custom LLM instances in quality metric calculation.
    src/praisonai-agents/pyproject.toml Bumped version to 0.0.101, added litellm as optional memory dependency, switched to dynamic package discovery.
    src/praisonai-agents/test.py Added new test script for agent and task execution workflow.
    src/praisonai/praisonai.rb Updated PraisonAI formula version and source URL from v2.2.28 to v2.2.29.
    src/praisonai/praisonai/deploy.py Updated installed praisonai version in Dockerfile creation to 2.2.29.
    src/praisonai/pyproject.toml Updated project version to 2.2.29 and praisonaiagents dependency to >=0.0.101.

    Sequence Diagram(s)

    sequenceDiagram
        participant User
        participant GuardrailAgentExample.py
        participant Agent
        participant Guardrail (validate_content)
    
        User->>GuardrailAgentExample.py: Start script with prompt
        GuardrailAgentExample.py->>Agent: Create agent with guardrail and retry limit
        GuardrailAgentExample.py->>Agent: Start agent with prompt
        Agent->>Guardrail: Validate generated content
        alt Content fails guardrail
            Guardrail-->>Agent: Validation failed
            Agent->>Guardrail: Retry validation (up to 1 time)
        end
        Guardrail-->>Agent: Validation passed or retries exhausted
        Agent-->>GuardrailAgentExample.py: Return result
    
    sequenceDiagram
        participant Memory
        participant LiteLLM
    
        Memory->>LiteLLM: Request completion for quality metrics (model: gpt-4o-mini)
        LiteLLM-->>Memory: Return JSON with quality metrics
        Memory-->>Memory: Parse and use metrics
    
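Per the diagram and the diff summary, memory.py now asks LiteLLM for quality metrics and parses the JSON it returns via `litellm.completion(model=model_name, messages=[...])`. The helper below is a hedged sketch of only the parsing side, so it runs without the litellm dependency; the metric key names are assumptions, not the actual memory.py schema.

```python
import json

def parse_quality_metrics(response_text, default=0.0):
    """Parse a metrics JSON blob returned by the model, tolerating bad output.

    Hypothetical helper: returns numeric fields only, falling back to a
    default "quality" score when the model's reply is not valid JSON.
    """
    try:
        data = json.loads(response_text)
    except (TypeError, json.JSONDecodeError):
        return {"quality": default}
    return {k: float(v) for k, v in data.items() if isinstance(v, (int, float))}
```

A tolerant parser matters here because, as written, the litellm call has no temperature pinned (see the review suggestion below), so the model's JSON formatting can vary between runs.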

    Suggested labels

    Review effort 3/5

    Poem

    In the garden of code, a new day shines bright,
    With version bumps blooming, all packages light.
    Guardrails now guide, as agents compose,
    LiteLLM whispers where OpenAI once rose.
    Rabbits hop forward, with scripts in their paws—
    Testing and learning, without a pause!
    🐇✨


    📜 Recent review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 8923c72 and ee85a63.

    ⛔ Files ignored due to path filters (2)
    • src/praisonai-agents/uv.lock is excluded by !**/*.lock
    • src/praisonai/uv.lock is excluded by !**/*.lock
    📒 Files selected for processing (15)
    • .gitignore (1 hunks)
    • docker/Dockerfile (1 hunks)
    • docker/Dockerfile.chat (1 hunks)
    • docker/Dockerfile.dev (1 hunks)
    • docker/Dockerfile.ui (1 hunks)
    • docker/README.md (2 hunks)
    • src/praisonai-agents/guardrail_agent_example.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/agent/agent.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/memory/memory.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/task/task.py (1 hunks)
    • src/praisonai-agents/pyproject.toml (3 hunks)
    • src/praisonai-agents/test.py (1 hunks)
    • src/praisonai/praisonai.rb (1 hunks)
    • src/praisonai/praisonai/deploy.py (1 hunks)
    • src/praisonai/pyproject.toml (4 hunks)

    @MervinPraison MervinPraison merged commit 206a457 into main Jun 5, 2025
    10 of 13 checks passed
    Contributor

    @gemini-code-assist gemini-code-assist bot left a comment


    Hello @MervinPraison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

    Summary of Changes

    Hello team, gemini-code-assist here with a summary of this pull request. This PR primarily focuses on updating the core PraisonAI and praisonaiagents package versions across the project. It also includes specific enhancements within the praisonaiagents package, such as integrating LiteLLM for memory operations, refining the handling of guardrail outputs, and improving how custom LLMs are managed in task callbacks. Additionally, new example files demonstrating guardrail usage and basic agent setup have been added. Various configuration and documentation files have been updated to reflect these changes.

    Highlights

    • Version Updates: The PraisonAI package is updated to version 2.2.29 and the praisonaiagents package is updated to 0.0.101. These updates are reflected in pyproject.toml, uv.lock, Dockerfiles, the Docker README, and the Homebrew formula.
    • LiteLLM Integration in Agents Memory: The praisonaiagents memory module (memory.py) now uses LiteLLM for calculating quality metrics, replacing a direct OpenAI client import. This likely provides more flexibility in choosing LLM providers for memory operations.
    • Guardrail Output Handling Refinement: The _apply_guardrail_with_retry method in praisonaiagents/agent.py has been updated to simplify the TaskOutput object created during guardrail processing, removing some fields and adding the agent name.
    • Improved Custom LLM Handling: The execute_callback method in praisonaiagents/task.py now includes logic to correctly extract the model name from custom LLM instances (like Ollama) and provides a default fallback model (gpt-4o-mini).
    • New Examples/Tests: Two new files, guardrail_agent_example.py and test.py, have been added to src/praisonai-agents/ to provide examples of using agents with guardrails and a basic agent/task setup with memory.

    Changelog

    Click here to see the changelog
    • .gitignore
      • Added CopilotKit* to ignore files/directories related to CopilotKit.
    • docker/Dockerfile
      • Updated praisonai dependency version to >=2.2.29.
    • docker/Dockerfile.chat
      • Updated praisonai dependency version to >=2.2.29.
    • docker/Dockerfile.dev
      • Updated praisonai dependency version to >=2.2.29.
    • docker/Dockerfile.ui
      • Updated praisonai dependency version to >=2.2.29.
    • docker/README.md
      • Updated PraisonAI version mention in 'Package Versions' to >=2.2.29.
      • Updated PraisonAI version in the 'Version Pinning' example to ==2.2.29.
    • src/praisonai-agents/guardrail_agent_example.py
      • Added a new example file demonstrating agent guardrail functionality.
    • src/praisonai-agents/praisonaiagents/agent/agent.py
      • Refactored TaskOutput creation within the guardrail retry logic (_apply_guardrail_with_retry) to simplify fields and add the agent name.
    • src/praisonai-agents/praisonaiagents/memory/memory.py
      • Switched from using an internal OpenAI client to LiteLLM for calculate_quality_metrics.
      • Changed the default model for quality metrics to gpt-4o-mini.
    • src/praisonai-agents/praisonaiagents/task/task.py
      • Added logic to extract the model name from custom LLM instances in execute_callback.
      • Added a default fallback model (gpt-4o-mini) when the model name cannot be determined from a custom LLM instance.
    • src/praisonai-agents/pyproject.toml
      • Updated praisonaiagents version to 0.0.101.
      • Added litellm>=1.50.0 to the memory extra dependency.
      • Updated setuptools package finding configuration.
    • src/praisonai-agents/test.py
      • Added a new basic test/example file for PraisonAIAgents.
    • src/praisonai-agents/uv.lock
      • Updated lock file to reflect praisonaiagents version 0.0.101.
      • Added litellm dependency for the memory extra.
    • src/praisonai/praisonai.rb
      • Updated Homebrew formula URL and SHA256 hash for version v2.2.29.
    • src/praisonai/praisonai/deploy.py
      • Updated the hardcoded praisonai version in the generated Dockerfile snippet to 2.2.29.
    • src/praisonai/pyproject.toml
      • Updated PraisonAI version to 2.2.29.
      • Updated praisonaiagents dependency version to >=0.0.101.
    • src/praisonai/uv.lock
      • Updated lock file to reflect praisonai version 2.2.29.
      • Updated lock file to reflect praisonaiagents dependency version 0.0.101.


    Versions bump and climb,
    Dependencies align,
    Code evolves with time.

    Footnotes

    1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

    @qodo-code-review

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Model Fallback

    The code uses 'gpt-4o-mini' as a fallback model here and in task.py. Since the same literal appears in both places, consider extracting this default to a shared constant or configuration value to keep them consistent.

    model_name = llm or "gpt-4o-mini"
    Dynamic SHA256

    The Ruby formula uses a dynamic shell command to calculate the SHA256 hash during installation, which could lead to inconsistent builds or security issues. The SHA256 should be pre-calculated and hardcoded.

    sha256 `curl -sL https://github.com/MervinPraison/PraisonAI/archive/refs/tags/v2.2.29.tar.gz | shasum -a 256`.split.first
    license "MIT"
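To act on the reviewer's point, the digest can be computed once, offline, and pasted into the formula as a literal string. A minimal sketch of that one-off step, using Python's hashlib (the helper name is hypothetical; a shell one-liner with `shasum -a 256` works equally well, just run locally rather than inside the formula):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# In practice the release tarball would be downloaded once, e.g.
#   curl -sL -o v2.2.29.tar.gz \
#     https://github.com/MervinPraison/PraisonAI/archive/refs/tags/v2.2.29.tar.gz
# and the printed digest hardcoded into praisonai.rb as:  sha256 "<digest>"
```

Hardcoding the digest lets Homebrew verify the download, whereas hashing whatever the formula happens to fetch at install time verifies nothing.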

    @qodo-code-review

    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Possible issue
    Fix conflicting requirements

    The guardrail function expects at least 50 characters, but the agent is
    instructed to write only 5 words, which will likely be less than 50 characters.
    This will cause the guardrail to reject the response and exhaust the single
    retry attempt.

    src/praisonai-agents/guardrail_agent_example.py [14]

    -agent.start("Write a welcome message with 5 words")
    +agent.start("Write a welcome message with at least 50 characters")
    Suggestion importance[1-10]: 8


    Why: This identifies a critical logical conflict where the guardrail expects at least 50 characters but the instruction asks for only 5 words, which will consistently fail validation and exhaust the single retry attempt.

    Medium
    Add missing temperature parameter

    The litellm.completion() function is missing the required temperature parameter
    which is needed for consistent quality metrics calculation. Without it, the
    function might use a default temperature that could lead to inconsistent
    results.

    src/praisonai-agents/praisonaiagents/memory/memory.py [919-924]

     # Use LiteLLM for consistency with the rest of the codebase
     import litellm
     
     # Convert model name if it's in litellm format
     model_name = llm or "gpt-4o-mini"
     
     response = litellm.completion(
         model=model_name,
    +    temperature=0.2,  # Add temperature for consistent results
         messages=[{
             "role": "user", 
             "content": custom_prompt or default_prompt
         }],


    Suggestion importance[1-10]: 6


    Why: Adding the temperature parameter to litellm.completion() would improve consistency in quality metrics calculation, as the previous OpenAI client might have had different default behavior.

    Low
    Add missing required parameter

    The TaskOutput constructor appears to be missing required parameters. Based on
    how it's used elsewhere, it likely needs the output parameter which was removed
    in this PR.

    src/praisonai-agents/praisonaiagents/agent/agent.py [621-625]

     task_output = TaskOutput(
         description="Agent response output",
         raw=current_response,
    +    output=current_response,
         agent=self.name
     )
    Suggestion importance[1-10]: 3


    Why: The suggestion questions the removal of the output parameter, but this appears to be an intentional simplification made in the PR rather than an oversight requiring correction.

    Low


    @cursor cursor bot left a comment


    Bug: Guardrail Conflict in Example Code

    The example src/praisonai-agents/guardrail_agent_example.py contains a logical inconsistency. The agent is instructed to write a 5-word message, but the configured guardrail requires content to be at least 50 characters long. This incompatibility causes the guardrail validation to consistently fail, making the example non-functional.

    src/praisonai-agents/guardrail_agent_example.py#L2-L14

    def validate_content(data):
        if len(str(data)) < 50:
            return False, "Content too short"
        return True, data

    agent = Agent(
        instructions="You are a writer",
        guardrail=validate_content,
        max_guardrail_retries=1
    )

    agent.start("Write a welcome message with 5 words")




    Contributor

    @gemini-code-assist gemini-code-assist bot left a comment


    Code Review

    The pull request updates the PraisonAI version, enhances agent and memory modules, adds new examples and tests, and updates dependencies. Overall, the changes seem well-structured and address the described objectives. Here are some specific comments and suggestions.

    Summary of Findings

    • TaskOutput instantiation: The TaskOutput instantiation is being modified, with several fields being removed. Verify that this change does not negatively impact any existing functionality.
    • Default LLM model: The default LLM model gpt-4o-mini is hardcoded in multiple places. Consider making it configurable and adding more descriptive comments.
    • Unused llm_config: The llm_config dictionary in test.py is defined but not used. Consider removing it or using it in the test.

    Merge Readiness

    The pull request is almost ready for merging. Addressing the comments related to the TaskOutput instantiation and the default LLM model would improve the code quality and maintainability. I am unable to directly approve this pull request, and recommend that others review and approve this code before merging. In particular, I would recommend that the author address the medium severity issues before merging.
